Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The pre-training models proposed for different modalities show a rising trend of homogeneity in their model structures, which opens up the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
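As a rough illustration of the five-component idea, the sketch below composes a BERT-like text model from interchangeable embedding, encoder, and target modules. The `ComposedModel` and `MLMTarget` classes are hypothetical stand-ins for the composition pattern, not TencentPretrain's actual API.

```python
import torch
import torch.nn as nn

class MLMTarget(nn.Module):
    """Masked-LM-style target that ties weights with the input embedding."""
    def __init__(self, embedding):
        super().__init__()
        self.embedding = embedding

    def forward(self, hidden, labels):
        logits = hidden @ self.embedding.weight.T          # (B, T, vocab)
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

class ComposedModel(nn.Module):
    """Pre-training model assembled from interchangeable components."""
    def __init__(self, embedding, encoder, target):
        super().__init__()
        self.embedding, self.encoder, self.target = embedding, encoder, target

    def forward(self, tokens, labels):
        return self.target(self.encoder(self.embedding(tokens)), labels)

vocab, dim = 30000, 512
emb = nn.Embedding(vocab, dim)
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
model = ComposedModel(embedding=emb,
                      encoder=nn.TransformerEncoder(layer, num_layers=6),
                      target=MLMTarget(emb))
tokens = torch.randint(0, vocab, (2, 16))
loss = model(tokens, tokens)    # toy objective: predict the tokens themselves
```

Swapping in a different encoder (e.g., a convolutional one for vision) or a different target would follow the same pattern, which is the point of the modular division.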
Pose registration is critical in vision and robotics. This paper focuses on the challenging task of initialization-free pose registration up to 7DoF for homogeneous and heterogeneous measurements. While recent learning-based methods show promise using differentiable solvers, they either rely on heuristically defined correspondences or are prone to local minima. We present a differentiable phase correlation (DPC) solver that is globally convergent and correspondence-free. When combined with a simple feature extraction network, our general framework DPCN++ allows for versatile pose registration with arbitrary initialization. Specifically, the feature extraction network first learns dense feature grids from a pair of homogeneous/heterogeneous measurements. These feature grids are then transformed into a translation- and scale-invariant spectrum representation based on Fourier transform and spherical radial aggregation, decoupling translation and scale from rotation. Next, rotation, scale, and translation are estimated independently and efficiently in the spectrum using the DPC solver. The entire pipeline is differentiable and trained end-to-end. We evaluate DPCN++ on multiple registration tasks with different input modalities, including 2D bird's-eye-view images, 3D object and scene measurements, and medical images. Experimental results demonstrate that DPCN++ outperforms both classical and learning-based baselines, especially on partially observed heterogeneous measurements.
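The translation branch of a differentiable phase correlation solver can be sketched in a few lines: instead of taking the argmax of the phase-correlation surface, a softmax-weighted expectation of the peak location keeps the estimate differentiable. This is a minimal 2D-translation-only illustration (the full solver also recovers rotation and scale via the spherical/spectral representations), not the authors' implementation.

```python
import torch

def dpc_translation(a, b, temperature=0.01):
    """Soft translation estimate between two (H, W) feature maps.

    Phase correlation gives a correlation surface whose peak encodes the
    shift; taking a softmax-weighted expectation of the peak location
    (rather than argmax) lets gradients flow to the feature extractor.
    """
    F, G = torch.fft.fft2(a), torch.fft.fft2(b)
    cross = F * torch.conj(G)
    corr = torch.fft.ifft2(cross / (cross.abs() + 1e-8)).real  # phase-only correlation
    corr = torch.fft.fftshift(corr)                            # zero shift at center
    w = torch.softmax(corr.flatten() / temperature, dim=0).reshape(corr.shape)
    H, W = corr.shape
    ys = torch.arange(H, dtype=a.dtype) - H // 2
    xs = torch.arange(W, dtype=a.dtype) - W // 2
    return (w.sum(dim=1) * ys).sum(), (w.sum(dim=0) * xs).sum()

img = torch.rand(64, 64)
shifted = torch.roll(img, shifts=(5, -3), dims=(0, 1))
print(dpc_translation(img, shifted))  # ~(-5, 3): the negative of the applied roll
```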
Nowadays, real data for person re-identification (ReID) tasks faces privacy concerns, e.g., the banned dataset DukeMTMC-ReID, making real data collection for ReID increasingly difficult. Meanwhile, the labor cost of labeling remains high, further hindering the development of ReID research. Therefore, many methods turn to generating synthetic images for ReID algorithms as an alternative to real images. However, there is an inevitable domain gap between synthetic and real images. In previous methods, the generation process is based on virtual scenes, and the synthetic training data cannot be adapted automatically to different target real scenes. To handle this problem, we propose a novel target-aware generation pipeline to produce synthetic person images, called TAGPerson. Specifically, it involves a parameterized rendering method in which the parameters are controllable and can be adjusted according to the target scene. In TAGPerson, we extract information from target scenes and use it to control our parameterized rendering process, generating target-aware synthetic images with a smaller gap to the real images in the target domain. In our experiments, our target-aware synthetic images achieve much higher performance than generalized synthetic images on MSMT17, i.e., 47.5% vs. 40.9% rank-1 accuracy. We will release this toolkit (code available at https://github.com/tagperson/tagperson-blender) for the ReID community to generate synthetic images with any desired flavor.
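A toy reading of target-aware parameter control might look like the following: summarize a few target-domain images, then bias the sampled rendering parameters toward those statistics. The parameter names and ranges here are invented for illustration; the real controls live in the released Blender pipeline.

```python
import random
import numpy as np

def estimate_target_stats(target_images):
    """Summarize a handful of target-domain images (grayscale uint8 arrays)."""
    return {"brightness_mu": float(np.mean([img.mean() for img in target_images]))}

def sample_render_params(stats):
    """Draw per-image rendering parameters biased toward the target scene."""
    return {
        "brightness": random.gauss(stats["brightness_mu"], 10.0),
        "cam_height_m": random.uniform(2.0, 4.0),   # invented plausible range
        "cam_pitch_deg": random.uniform(10.0, 30.0),
        "person_id": random.randrange(1000),
    }

fake_targets = [np.random.randint(0, 256, (128, 64)) for _ in range(8)]
print(sample_render_params(estimate_target_stats(fake_targets)))
```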
Decoupling spatiotemporal representation refers to decomposing spatial and temporal features into dimension-independent factors. Although previous RGB-D-based motion recognition methods have achieved promising performance through tightly coupled multimodal spatiotemporal representations, they still suffer from (i) optimization difficulty under small-data settings due to the tightly spatiotemporal-entangled modeling; (ii) information redundancy, since such representations usually contain abundant marginal information that is weakly relevant to classification; and (iii) low interaction between multimodal spatiotemporal information caused by insufficient late fusion. To alleviate these drawbacks, we propose to decouple and recouple the spatiotemporal representation for RGB-D-based motion recognition. Specifically, we disentangle the task of learning spatiotemporal representation into three sub-tasks: (1) learning high-quality and dimension-independent features through a decoupled spatial and temporal modeling network; (2) recoupling the decoupled representation to establish stronger spatiotemporal dependency; and (3) introducing a Cross-modal Adaptive Posterior Fusion (CAPF) mechanism to capture cross-modal spatiotemporal information from RGB-D data. The seamless combination of these novel designs forms a robust spatiotemporal representation that achieves better performance than state-of-the-art methods on four public motion datasets. Our code is available at https://github.com/damo-cv/motionrgbd.
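A single-modality toy version of the decouple-then-recouple idea can be sketched as separate spatial and temporal branches whose outputs are recoupled with cross-attention. This is an illustrative sketch, not the authors' network.

```python
import torch
import torch.nn as nn

class DecoupleRecouple(nn.Module):
    """Separate spatial and temporal branches, recoupled with cross-attention."""
    def __init__(self, dim=64):
        super().__init__()
        self.spatial = nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1)
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.recouple = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, video):                  # (B, T, 3, H, W)
        B, T = video.shape[:2]
        s = self.spatial(video.flatten(0, 1))  # per-frame spatial features
        s = s.mean(dim=(2, 3)).view(B, T, -1)  # (B, T, dim)
        t = self.temporal(s.transpose(1, 2)).transpose(1, 2)  # temporal branch
        fused, _ = self.recouple(query=t, key=s, value=s)     # recoupling
        return fused.mean(dim=1)               # clip-level embedding

print(DecoupleRecouple()(torch.rand(2, 8, 3, 32, 32)).shape)  # torch.Size([2, 64])
```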
Monocular visual-inertial odometry (VIO) is a key problem in robotics and autonomous driving. Traditional approaches solve it by filtering or optimization; while fully interpretable, they rely on manual intervention and empirical parameter tuning. Learning-based approaches, on the other hand, can be trained end-to-end but require large amounts of training data to learn millions of parameters, and their non-interpretable, heavy models hinder generalization. In this paper, we propose a fully differentiable and interpretable bird's-eye-view (BEV) VIO model for robots with local planar motion that can be trained without deep neural networks. Specifically, we first adopt an Unscented Kalman Filter as a differentiable layer to predict pitch and roll, where the noise covariance matrices are learned to filter the raw IMU data. Second, the refined pitch and roll are used to retrieve a gravity-aligned BEV image for each frame via differentiable camera projection. Finally, a differentiable pose estimator is utilized to estimate the remaining 3-DoF pose between BEV frames, resulting in a 5-DoF pose estimate. Our method allows the covariance matrices to be learned under supervision from the pose estimation loss, outperforming empirical baselines. Experimental results on synthetic and real-world datasets show that our simple approach is competitive with state-of-the-art methods and generalizes well to unseen scenes.
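The key trainable ingredient is the learned noise covariance. The sketch below uses a simple complementary-style filter as a stand-in for the paper's Unscented Kalman Filter, just to show how (log-)variances can become parameters updated by the downstream pose loss; it is not the paper's filter.

```python
import torch
import torch.nn as nn

class LearnedNoiseFilter(nn.Module):
    """Stand-in for the differentiable filter layer.

    The blend gain comes from learnable (log-)variances, so the noise
    covariances are nn.Parameters trained through the pose loss.
    """
    def __init__(self):
        super().__init__()
        self.log_q = nn.Parameter(torch.zeros(2))  # process (gyro) noise, pitch/roll
        self.log_r = nn.Parameter(torch.zeros(2))  # measurement (accel) noise

    def forward(self, gyro_increments, accel_angles):  # both (T, 2)
        q, r = self.log_q.exp(), self.log_r.exp()
        k = r / (q + r)            # high measurement noise -> trust the prediction
        state, out = accel_angles[0], []
        for t in range(gyro_increments.shape[0]):
            pred = state + gyro_increments[t]             # propagate with gyro
            state = k * pred + (1 - k) * accel_angles[t]  # correct with accel
            out.append(state)
        return torch.stack(out)    # refined pitch/roll feeding the BEV projection

angles = LearnedNoiseFilter()(torch.randn(100, 2) * 0.01, torch.randn(100, 2) * 0.05)
```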
In recent years, benefiting from the expressive power of Graph Convolutional Networks (GCNs), significant breakthroughs have been made in the face clustering area. However, little attention has been paid to GCN-based clustering on imbalanced data. Although the imbalance problem has been extensively studied, the impact of imbalanced data on the GCN-based linkage prediction task is quite different, causing problems in two aspects: imbalanced linkage labels and biased graph representations. The former is similar to that in the classic image classification task, but the latter is a particular problem in GCN-based clustering via linkage prediction. Significantly biased graph representations in training can cause catastrophic over-fitting of a GCN model. To tackle these challenges, we propose a linkage-based doubly imbalanced graph learning framework for face clustering. In this framework, we evaluate the feasibility of existing methods for the imbalanced image classification problem on GCNs, and present a new method that alleviates the imbalanced labels and also augments graph representations using a Reverse-Imbalance Weighted Sampling (RIWS) strategy. With the RIWS strategy, probability-based class balancing weights ensure the overall distribution of positive and negative samples; in addition, weighted random sampling provides diverse subgraph structures, which effectively alleviates the over-fitting problem and improves the representation ability of GCNs. Extensive experiments on a series of imbalanced benchmark datasets synthesized from MS-Celeb-1M and DeepFashion demonstrate the effectiveness and generality of our proposed method. Our implementation and the synthesized datasets will be openly available at https://github.com/espectre/GCNs_on_imbalanced_datasets.
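The sampling idea as described, inverse-frequency class weights combined with weighted random sampling of subgraph seeds, can be sketched directly. The class-weight normalization below is one plausible choice, not necessarily the paper's exact formula.

```python
import random
from collections import Counter

def riws_weights(labels):
    """Inverse-frequency weights: minority linkage labels are drawn more often."""
    freq, n = Counter(labels), len(labels)
    return [n / (len(freq) * freq[y]) for y in labels]

def sample_subgraph_seeds(labels, k):
    """Weighted random sampling of k node indices; repeated draws across
    iterations yield diverse subgraph structures."""
    return random.choices(range(len(labels)), weights=riws_weights(labels), k=k)

labels = [1] * 90 + [0] * 10                    # 9:1 imbalanced linkage labels
seeds = sample_subgraph_seeds(labels, k=20)
print(sum(labels[i] == 0 for i in seeds))       # negatives now ~half of the draws
```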
Late-life depression (LLD) is a highly prevalent mood disorder occurring in older adults and is frequently accompanied by cognitive impairment (CI). Studies have shown that LLD may increase the risk of Alzheimer's disease (AD). However, the heterogeneity of presentation of geriatric depression suggests that multiple biological mechanisms may underlie it. Current biological research on LLD progression incorporates machine learning that combines neuroimaging data with clinical observations. There are few studies on incident cognitive diagnostic outcomes in LLD based on structural MRI (sMRI). In this paper, we describe the development of a hybrid representation learning (HRL) framework for predicting cognitive diagnosis over 5 years based on T1-weighted sMRI data. Specifically, we first extract prediction-oriented MRI features via a deep neural network, and then integrate them with handcrafted MRI features via a Transformer encoder for cognitive diagnosis prediction. Two tasks are investigated in this work, including (1) identifying cognitively normal subjects with LLD and never-depressed older healthy subjects, and (2) identifying LLD subjects who developed CI (or even AD) and those who stayed cognitively normal over five years. To the best of our knowledge, this is among the first attempts to study the complex heterogeneous progression of LLD based on task-oriented and handcrafted MRI features. We validate the proposed HRL on 294 subjects with T1-weighted MRIs from two clinically harmonized studies. Experimental results suggest that the HRL outperforms several classical machine learning and state-of-the-art deep learning methods in LLD identification and prediction tasks.
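A minimal sketch of the hybrid fusion step, assuming the deep and handcrafted MRI descriptors arrive as fixed-length vectors: project both to a shared width, treat them as two tokens, and fuse with a Transformer encoder. All dimensions are illustrative, not the HRL's actual configuration.

```python
import torch
import torch.nn as nn

class HybridFusion(nn.Module):
    """Project deep and handcrafted MRI feature vectors to a shared width,
    fuse them as two tokens with a Transformer encoder, and classify."""
    def __init__(self, deep_dim=256, hand_dim=64, width=128, n_classes=2):
        super().__init__()
        self.proj_deep = nn.Linear(deep_dim, width)
        self.proj_hand = nn.Linear(hand_dim, width)
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(width, n_classes)

    def forward(self, deep_feat, hand_feat):
        tokens = torch.stack([self.proj_deep(deep_feat),
                              self.proj_hand(hand_feat)], dim=1)  # (B, 2, width)
        return self.head(self.fuse(tokens).mean(dim=1))           # (B, n_classes)

logits = HybridFusion()(torch.rand(4, 256), torch.rand(4, 64))
```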
Autonomous robotic surgery has advanced significantly based on analysis of visual and temporal cues in surgical workflow, but relational cues from domain knowledge remain under-explored. Complex relations in surgical annotations can be divided into intra- and inter-relations, both valuable to autonomous systems to comprehend surgical workflows. Intra- and inter-relations describe the relevance of various categories within a particular annotation type and the relevance of different annotation types, respectively. This paper aims to systematically investigate the importance of relational cues in surgery. First, we contribute the RLLS12M dataset, a large-scale collection of robotic left lateral sectionectomy (RLLS), by curating 50 videos of 50 patients operated by 5 surgeons and annotating a hierarchical workflow, which consists of 3 inter- and 6 intra-relations, 6 steps, 15 tasks, and 38 activities represented as the triplet of 11 instruments, 8 actions, and 16 objects, totaling 2,113,510 video frames and 12,681,060 annotation entities. Correspondingly, we propose a multi-relation purification hybrid network (MURPHY), which aptly incorporates novel relation modules to augment the feature representation by purifying relational features using the intra- and inter-relations embodied in annotations. The intra-relation module leverages an R-GCN to implant visual features in different graph relations, which are aggregated using a targeted relation purification with affinity information measuring label consistency and feature similarity. The inter-relation module is motivated by attention mechanisms to regularize the influence of relational features based on the hierarchy of annotation types from the domain knowledge. Extensive experimental results on the curated RLLS dataset confirm the effectiveness of our approach, demonstrating that relations matter in surgical workflow analysis.
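One plausible reading of "targeted relation purification", weighting per-relation features by their affinity with the node's own feature so that inconsistent relations contribute less, can be sketched as follows. This is illustrative, not MURPHY's exact formulation.

```python
import torch
import torch.nn.functional as F

def purify_relation_features(rel_feats, node_feat):
    """Aggregate per-relation features weighted by cosine affinity with the
    node's own feature, down-weighting inconsistent relations."""
    affinity = F.cosine_similarity(rel_feats, node_feat.unsqueeze(0), dim=1)  # (R,)
    w = torch.softmax(affinity, dim=0)
    return (w.unsqueeze(1) * rel_feats).sum(dim=0)

rel_feats = torch.rand(6, 32)   # e.g. R-GCN outputs for 6 graph relations
node_feat = torch.rand(32)
print(purify_relation_features(rel_feats, node_feat).shape)  # torch.Size([32])
```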
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
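To make dimension (i) concrete, an interaction-trace schema might log every turn together with first-person and third-party ratings, enabling process metrics such as an edit rate that output-only evaluation cannot see. The schema below is a hypothetical sketch, not the framework's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Turn:
    prompt: str
    completion: str
    accepted: bool          # did the user keep the model's suggestion?
    latency_s: float

@dataclass
class Session:
    turns: List[Turn] = field(default_factory=list)
    first_person_rating: Optional[int] = None  # the user's own experience (1-5)
    third_party_rating: Optional[int] = None   # external judge of the final output

    def edit_rate(self) -> float:
        """A process metric that output-only evaluation cannot observe."""
        return sum(not t.accepted for t in self.turns) / max(len(self.turns), 1)
```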
End-to-end Speech Translation (E2E ST) aims to translate source speech into the target translation without generating an intermediate transcript. However, existing approaches for E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is to bridge word-level representations of both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1 hour of parallel data. Code is available at https://anonymous.4open.science/r/WACO .
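The core of word-aligned contrastive learning is an InfoNCE objective over word-level pairs. The sketch below assumes speech frames have already been pooled into per-word embeddings (e.g., via a forced aligner) and shows a plausible form of the objective, not the released implementation.

```python
import torch
import torch.nn.functional as F

def word_aligned_contrastive_loss(speech_word_emb, text_word_emb, tau=0.07):
    """InfoNCE over word-level pairs: the speech embedding of a word should
    match its own transcript-word embedding against other words in the batch."""
    s = F.normalize(speech_word_emb, dim=-1)       # (N, D) pooled speech frames
    t = F.normalize(text_word_emb, dim=-1)         # (N, D) word embeddings
    logits = s @ t.T / tau                         # (N, N) similarity matrix
    labels = torch.arange(s.size(0))
    # symmetric loss: align speech->text and text->speech
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

loss = word_aligned_contrastive_loss(torch.rand(8, 256), torch.rand(8, 256))
```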